
One of the challenges of working with software is that one is rarely building something from scratch. Usually, a team is working across a mix of maintaining existing (sometimes old) code; refactoring/migrating old concepts to new places; experimenting with new ideas; and/or just changing things to make life better. This can make it really hard to decide just how much effort needs to be put into a piece of work. Does it need to be the all-singing-dancing-tested story? Or will a cowboy-hack for fast feedback work just as well? And what does it mean to cowboy-hack in the financial industry vs for a game used by ten of my family members?

I work with a team in the financial industry, for a fairly high-profile client that is nothing less than obsessed with their brand and reputation. Which is a good thing to worry about when people are trusting us with their life savings. We’re also busy migrating our web offering to a new technical platform, except it’s not purely a migration because we’re also extending our current offering to work on mobile devices, so there’s experimentation and user testing required too. In this environment, it’s very hard to deploy anything that hasn’t got a solid safety net (automated testing), whether it’s feature-toggled on or not. The focus is on keeping things safe rather than learning fast and, sometimes, that leads to us over-engineering solutions when we’re not even sure we’re going to keep them. All of which means that learning is often very expensive and usually takes a long time.
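
(In case it helps to picture that safety net: below is a minimal feature-toggle sketch in TypeScript. It is purely illustrative; every name in it (isEnabled, mobilePortfolioView, the render functions) is hypothetical and not from our actual codebase.)

```typescript
// Minimal feature-toggle sketch: experimental code ships "dark" behind a flag,
// so it can be deployed safely and only switched on when we want to learn from it.

type FeatureFlags = Record<string, boolean>;

// In a real system the flags would come from a config service or database;
// they are hard-coded here purely for illustration.
const flags: FeatureFlags = {
  mobilePortfolioView: false, // experimental: off by default until proven
};

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false; // unknown flags default to off (the safe choice)
}

function renderClassicPortfolio(): string {
  return "classic portfolio"; // existing, well-tested path
}

function renderMobilePortfolio(): string {
  return "mobile portfolio"; // new, experimental path
}

function renderPortfolio(): string {
  return isEnabled("mobilePortfolioView")
    ? renderMobilePortfolio()
    : renderClassicPortfolio();
}

console.log(renderPortfolio()); // -> "classic portfolio" while the toggle is off
```

Flipping mobilePortfolioView to true would route users onto the experimental path without deploying new code.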

With that context, we have recently had some conversations as a team about how we can narrow down our options and learn more quickly and cheaply for those pieces where we really don’t know what we need to build. A lot of good questions came out of that session, including one about when we can just hack something together to test a theory and when we should apply due diligence to ensure code is “safe” and tested. Here are my thoughts, using two well-known and useful models (especially when used together), on when it is OK to experiment without worrying about the things that keep us “safe”, and when we need to keep “safety” in mind while we iterate over an idea.

(In case it isn’t clear, there are cheaper ways than writing code to test out an idea – and those would be first prize. However, at some point you will probably need to build something to progress the fact-finding for ideas that have passed through the first few experiment filters.)

The first model is fairly well known (and you can find a full explanation here):

[Image: Henrik Kniberg’s MVP illustration (from skateboard to car)]

The second is one I’ve newly learned about, and using them together was a great a-ha moment for me 🙂

[Image: Hierarchy of User Needs pyramid]

How “safe” you need to keep things depends on where you are in the learning journey and what your experiment outcomes relate to.

The first phase is testing an idea. This is figuring out if something will even work at all. Will someone like it enough to pay for it? Do we even have the skills to use this? Does that third-party plugin do all the special things we need it to? At this point, you want to learn as quickly as possible whether or not your idea is even feasible. You may never even use it or build it. If you can avoid writing code altogether (the book “The Lean Startup” is full of great examples), then you should. It’s about getting as short a feedback loop going as possible. In the diagram, the person wanted something to help them get from A to B. One of the things that was tested was a bus ticket.

The second phase is building something usable. This speaks to the bottom two layers (and some of the third) of the hierarchy of user needs triangle above. Here we probably do need to keep our code safe, because it’s not really (properly) usable until it is released. We will still be experimenting and learning, but some of those experiments will be around how things work “in the wild” – e.g. from a user perspective, a performance perspective, or a feature perspective. Whatever you build in this phase you’re probably going to mostly keep and iterate over (because you’ve already tested that the idea is valid), so you do have to consider maintenance (and, thus, care about things like automated testing) in this phase. At a micro level, within this phase, there will probably still be ideas that you need to test out; most of the experiments are more likely to be around how rather than what, but there is a blend of both.
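
To make “care about things like automated testing” a little more concrete, here is a minimal sketch of the kind of safety-net check I mean, reusing the hypothetical isEnabled helper from the earlier sketch (again, every name is invented for illustration):

```typescript
// Minimal safety-net sketch using plain Node.js assertions (no test framework),
// guarding the hypothetical feature-toggle helper from the earlier example.
import assert from "node:assert";

type FeatureFlags = Record<string, boolean>;
const flags: FeatureFlags = { mobilePortfolioView: false };

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false;
}

// An unknown flag must default to off: a typo in a flag name should never
// accidentally expose an unfinished feature to users.
assert.strictEqual(isEnabled("someFlagNobodyDefined"), false);

// A known flag must report its configured state.
assert.strictEqual(isEnabled("mobilePortfolioView"), false);

console.log("feature-toggle safety checks passed");
```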

The final phase – tackling the rest of the pyramid – includes adding in those “special effects”, but ALSO incorporating more of the nice-to-have feedback from users that you should have gained from all your experiments along the way. In the transport example, the car at the end is a convertible rather than a sedan. A sedan would have been perfectly usable; however, at some point when they were testing out ideas, the user mentioned that the one reason they preferred the skateboard over the bus ticket was that they liked to feel the fresh air on their face. That kind of feedback is what makes things lovable/special – although in no way does it make them more usable. Most organisations running an iterative approach choose to stop once they get here, due to the law of diminishing returns. The problem with waterfall was that, because everything was done big bang, you didn’t have the option of stopping when you’d “done enough” (because there were no feedback loops to tell you when “done enough” was reached).

Depending on which idea or feature we are talking about, a team could be anywhere on this journey. Which is why the WHY becomes super important when we’re discussing a product backlog item. WHY are we doing this piece of work? WHAT do we need to get out of it? WHAT is the assumption that we want to test? And what does that mean for how much effort and rigour we need to apply to that piece of work?

Besides helping teams identify what kind of work they should be focusing on, these two models (especially the user needs one) have been helpful to me when having conversations about where UX and UI fit into the Scrum team mix. If you think of UX as part of making things basically usable (the bottom part of that layer, if you will); a combination of UX and UI as moving you into the pleasurable layer; and UI as creating the cherry on top – then it’s easier to see when just building to a wireframe is OK (to get feedback on functional and reliable) versus when you need something pixel-perfect with animations (that incorporates what you learned from testing functional, reliable features).

Or, roughly:

[Image: Hierarchy of User Needs mapped to UX and UI]

Let me know if you find these two models helpful. Have you used them before? Where have you found that they didn’t translate that well into a real-life scenario? What did you learn?

#SGZA 2016: “Just Right”

Posted: November 17, 2016 in Agile, Team

I recently attended the regional Scrum Gathering for 2016 in Cape Town. This is a post in a series on the talks that I attended. All posts are based on the sketch-notes that I took in the sessions. 

Danie Roux gave an entertaining opening keynote, which started off with a retelling of the well-known fairy tale Goldilocks and the Three Bears. We also touched on the adventures of Cinderella (and her glass slipper) and the Hunchback of Notre Dame during the talk. Danie challenged us to consider the modern versions of the fairy tales (Shu) against the logic they contained (Ha – or huh?) and their actual origins in history (Ri). Besides some fascinating facts about the origins of certain fairy tales, the other take-outs from his talk were:

  1. Perspective matters.
  2. Roles are meaningless on their own – they need to be considered in the context of a relationship.
  3. A cadence is a pause. Pauses between notes create music.
  4. The three hardest things to get a team to do are: (1) talk; (2) talk to each other; and (3) talk to each other about work.
  5. The definition of ScrumBut: (ScrumBut)(Reason)(Workaround)
    1. Translation: when we say “Scrum But”, we usually go: “this is what Scrum would recommend”, “but this is why it won’t work for us”, “so this is what we’ll do instead”.
    2. Perhaps we should try for Scrum And?

Finally, he told us the story of his friend and the glass sable antelope… a reminder that when we give someone a gift, we cannot be upset by what they do with it (even if they destroy it), regardless of what we invested in getting it for them.


Anything in there that you found interesting?

Hindsight is 20/20

Posted: December 2, 2015 in Team

There has been a lot of change in my space in the past year, and a lot of it hasn’t been managed very well (which creates a lot of ‘people noise’). I’ve been exposed to a number of change management models over the years, including ADKAR and this cool exercise. I quite liked this tool about Levers of Influence, which one of our People Operations (a.k.a. HR) team members shared with us. Although I already knew we’d done very badly when it came to change management, when I reviewed our two biggest changes (moving to Feature Teams, and replacing a synchronised release every nine-ish weeks with a release window every month), the tool helped highlight examples of what we had done badly, which also meant we could see where we needed to focus our recovery efforts.

[Image: Levers of Influence]

This is my retrospective on the change to monthly releases: what we did, what we didn’t do, and what we (probably) should have done.

1. A Compelling Story

The idea to move to monthly release cycles had been brewing in the senior heads for a while; however, we were going through a major upgrade of one of our core systems (in a very waterfall fashion), which meant that anything unrelated to that was not really discussed (to avoid distractions). Our upgrade was remarkably smooth and successful (considering its size and time span) and, about two weeks after we went live with it, senior management announced that the release cycle was changing. Not many people had seen this coming, and the announcement was all of two sentences (mixed in with the other left-field announcement: the move to vertical feature teams rather than systems-based domain teams). In hindsight, most people didn’t know what this shorter release cycle meant (or what was being asked of us). Nor was the reason why we were making this change well communicated (so, to most people, it didn’t really make sense).

In hindsight:

  1. The why for the change should have been better communicated: we want to be able to respond more quickly and move to a space where we can release when we’re ready.
  2. The impact should have been better understood. One of the spaces that has felt the pain most is our testers. With a lack of automation throughout our systems and the business IP sitting with a few select people, the weight of the testing has fallen onto a few poor souls. On top of this, our testing environments are horrible (unstable, not Production-like, and a pain to deploy to), so merely getting an environment into a state fit for testing requires a lot of effort across the board.
  3. We should have explored the mechanics/reasons in more detail with smaller groups. For example, it was about 3-4 months before people began to grasp that just because one COULD release monthly, it didn’t mean that one had to. The release window was just that: a window to release stuff that was ready for Production. If you had to skip a release window because you weren’t ready, then that was OK. (A reminder here that our monthly release ‘trains’ are an interim/transition phase – we ultimately want to be able to release as often as we like.)

2. Reinforcement Mechanisms

One of the motivating factors for senior management to shorten the release cycle from nine weeks to one month was that our structures, processes and systems for releasing were monolithic and, although the intention had always been to improve the process, it just wasn’t happening. In a way, they created the pain knowing full well that we didn’t have the infrastructure in place to support the change, because they wanted to force teams to find ways to deal with that pain. So, in this case, there weren’t any reinforcement mechanisms at all. The closest thing we had was a team dedicated to automating deployments across the board, and it had only been working together for about six weeks before the announcement.

In hindsight:

  1. There should have been greater acknowledgement of the fact that we didn’t have the structures, processes and systems in place to support the change.
  2. We shouldn’t have done the release cycle and Feature Team change at the same time (as there weren’t structures, processes and systems in place to support that change either).
  3. We should have been more explicit about the support that would be provided to help people align structures, systems and processes to the change.

3. Skills Required for Change

Shortening the release cycle certainly created opportunities for people to change their behaviour (whether in a good or bad way). Unfortunately, most of our teams didn’t have the skill sets to cope with the changes: we were lacking automation skills, for both testing and deployment. Throwing in the Feature Team change, with its related ‘team member shuffle’, also meant that some teams were left without the necessary domain knowledge too.

In hindsight:

  1. We should have understood better what skills each team would need to benefit from the opportunities in the change.
  2. We should have understood the gaps.
  3. We should have identified and communicated how we would support teams in their new behaviours and in gaining new skill sets.

4. Role Modelling

Most people heard the announcement and then went back to work and continued the way they had before. When the change became tangible, they tried to find ways to keep doing what they had done before. (A common response in the beginning to “why are we doing this?” was “that’s the way we’ve always done it”.) The leaders who had decided to enforce the change were not involved operationally, so they could not act as role models. Considering the lack of skills and reinforcement mechanisms, role models were few and far between.

In hindsight:

  1. If we’d covered the other levers better, we would have had people better positioned to act as role models.
  2. Perhaps we should have considered finding external coaches with experience in this kind of change to help teams role model.
  3. Another option may have been to have a pilot team initially that could later act as a role model for the rest of the teams.

Have you ever had a successful change? If so, which levers did you manage well? Which ones did you miss?

Once upon a time there was a team called the Kanban team. But they didn’t do Kanban. They had a task board and they tracked their work, but that was it. The Kanban team also had a new agile coach. This new coach wasn’t very familiar with Kanban, but did some research and realised that the team was missing some basics. The team met, did some exercises, reviewed their processes and board, and made some changes – changes supporting Kanban basics. But these changes didn’t stick. The team didn’t really like the new board, and they didn’t feel empowered to improve on it. The fact that the lanes were taped onto the whiteboard also meant it was hard to change things on the fly, so in some cases the team simply couldn’t be bothered.

The agile coach was frustrated. The team was not learning and not growing. Their environment was changing and they were not adapting. They didn’t actually want to do Kanban. They had a broken information radiator. The agile coach decided to watch and wait. The more she watched, the more she realised what the gaps were – where the team needed feedback – but she was at a loss as to how to resolve those gaps. She had conversations with certain key team members and stakeholders. She shared her concerns and observations. She created awareness through conversation. However, the team was not yet ready to tackle these things. They needed some guidance and direction. So she waited. And researched. And discussed. And watched some more. And then another agile coach recommended she read “Lean from the Trenches”. And she realised that this was what she had been searching for.

Having learnt from the first attempted iteration of the board, the agile coach decided on a different approach to presenting the changes. Thankfully these changes could coincide with a team reshuffle: the Kanban team would be no more. There would be a new team – a mix of old and new team members. And the incoming team members were familiar with Scrum and simple task boards. This provided a ‘good excuse’ to advocate some changes to the information radiator. So what she did was:

  1. She used the ideas from “Lean from the Trenches”, together with her observations of how the team worked and the type of work they tackled, to draft a new task board. She made sure that the new task board still included the information the team found valuable on the current board – even where it was currently hidden or somewhat confusing. She also included some basic changes she hoped would help generate feedback to guide team-driven improvements.
  2. She shared her board with team members individually. She asked them what their thoughts were. She asked them for questions and feedback. She was happy to see that they quickly connected the new and the old and found the new version simpler to understand.
  3. She shared that she was going to remove the permanent lines. The lines would be drawn with whiteboard markers. It would be easy to change the board. Things would be more flexible.

Eventually the big day arrived. The agile coach and one of the analysts mapped the existing stories and tasks to the new board and then the hard work of ripping down the old board began. Come Monday morning, the new board was ready for stand-up. There were still some questions and some of the stories/tasks moved a bit during the session, but even from day one the process was working better:

  1. The team solved discrepancies on the board themselves. With the previous board iteration, they had turned to the agile coach when they had a question or weren’t sure how to use their information radiator, instead of finding a solution for themselves.
  2. There were already suggestions from the team around how to tweak the board. They were already taking ownership of their information radiator.
  3. It was VERY visible that work-in-progress was piling up on one of the key stories – and that there were in-progress stories and tasks that no one was currently working on.

The key take-aways?

  • Don’t be afraid to try something new.
  • Be ready for failure – ideas won’t always work the first time round, but that’s how one learns.
  • Persevere: if at first you don’t succeed, try, try again.
  • And once it’s initiated, let it go – ultimately the team needs to own the change.

What changes have you struggled then succeeded to make in your team recently? What techniques eventually led to your success?

Dual Track Scrum

[Image: Dual Track Scrum]

This week I stumbled across an article which referenced something called “Dual Track Scrum” (see the links below). Intrigued, I researched a little more and was fascinated to discover that there was a documented process/approach to product development and delivery that was very similar to what had evolved for a product team I previously worked with.

This isn’t the first time this has happened to me. The last significant déjà vu experience was when I found that there was a name for a software development approach that included: time boxing; daily team check-ins; planning and estimation as a team; work defined on physical cards on a whiteboard; a definition of done for the time box that included all delivery activities; and a list of features that could be traded in and out if not yet started. Yes, that was the day I discovered that the approach our team had “created” to successfully deliver an off-shore project was, actually, called Scrum.

Links: