Posts Tagged ‘feedback’

One of the challenges of working with software is that one is rarely building something from scratch. Usually, a team is working across a mix of maintaining existing (sometimes old) code; refactoring/migrating old concepts to new places; experimenting with new ideas; and/or just changing things to make life better. This can make it really hard to decide just how much effort needs to go into a piece of work. Does it need to be the all-singing, all-dancing, fully tested story? Or will a cowboy-hack for fast feedback work just as well? And what does it mean to cowboy-hack in the financial industry vs for a game used by ten of my family members?

I work with a team in the financial industry, serving a fairly high-profile client that is nothing short of obsessed with its brand and reputation. Which is a good thing to worry about when people are trusting us with their life savings. We’re also busy migrating our web offering to a new technical platform, except it’s not purely a migration: we’re also extending our current offering to work on mobile devices, so there’s experimentation and user testing required too. In this environment, it’s very hard to deploy anything that hasn’t got a solid safety net (automated testing), whether it’s feature toggled on or not. The focus is on keeping things safe rather than learning fast, and that sometimes leads us to over-engineer solutions before we’re even sure we’re going to keep them. All of which means learning is often expensive and usually takes a long time.
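To make the toggle idea concrete, here is a minimal sketch of what “feature toggled off” can look like (Python and all the names are illustrative only; this isn’t our actual stack): the experimental path ships dark behind a flag while the existing, tested behaviour remains the default.

```python
import os


def mobile_quick_view_enabled() -> bool:
    """Hypothetical toggle read from an environment variable: flip it on in a
    test environment without redeploying or touching the stable path."""
    return os.environ.get("FEATURE_MOBILE_QUICK_VIEW", "off") == "on"


def render_account_summary(balance: float) -> str:
    if mobile_quick_view_enabled():
        # Experimental path: rough, fast-feedback work we may well throw away.
        return f"Quick view | balance: {balance:.2f}"
    # Default path: the existing, tested behaviour that everyone else still gets.
    return f"Account summary\nAvailable balance: {balance:.2f}"


if __name__ == "__main__":
    print(render_account_summary(1234.56))
```

The flag keeps the feedback loop cheap for the experiment without touching the behaviour that the existing safety net protects.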

With that context, we have recently had some conversations as a team about how we can narrow down our options and learn more quickly and cheaply for those pieces where we really don’t know what we need to build. A lot of good questions came out of that session, including when we can just hack something together to test a theory and when we should apply due diligence to ensure code is “safe” and tested. Here are my thoughts, using two useful models (especially when used together), on when it is OK to experiment without worrying about the things that keep us “safe”, and when we need to keep “safety” in mind while we iterate over an idea.

(In case it isn’t clear, there are cheaper ways than writing code to test out an idea – and those would be first prize. However, at some point you will probably need to build something to progress the fact-finding for ideas that have passed through the first few experiment filters.)

The first model is fairly well known (and you can find a full explanation here):

[Image: Henrik Kniberg’s MVP illustration, iterating from skateboard to car]

The second is one I’ve newly learned about, and using them together was a great a-ha moment for me 🙂

[Image: Hierarchy of User Needs pyramid, from functional and reliable through usable to pleasurable]

How “safe” you need to keep things depends on which learning phase you are in and what your experiment outcomes relate to.

The first phase is testing an idea. This is figuring out whether something will work at all. Will someone like it enough to pay for it? Do we even have the skills to use this? Does that third-party plugin do all the special things we need it to? At this point, you want to learn as quickly as possible whether or not your idea is even feasible. You may never even use it or build it. If you can avoid writing code altogether (the book “The Lean Startup” is full of great examples), then you should. It’s about getting the shortest feedback loop going that you possibly can. In the diagram, the person wanted something to help them get from A to B. One of the things that was tested was a bus ticket.

The second phase is building something usable. This speaks to the bottom two layers (and some of the third) of the hierarchy of user needs triangle above. Here we probably do need to keep our code safe, because it’s not really usable (properly usable) until it is released. We will still be experimenting and learning, but some of those experiments will be around how this works “in the wild” – e.g. from a user perspective, a performance perspective, or a feature perspective. Whatever you build in this phase you’re probably going to mostly keep and iterate over (because you’ve already tested that the idea is valid), so you do have to consider maintenance (and, thus, care about things like automated testing) in this phase. At a micro level within this phase, there will probably still be ideas that you need to test out; most of the experiments are more likely to be around how rather than what, but there is a blend of both.

The final phase – tackling the rest of the pyramid – includes adding in those “special effects”, but ALSO incorporating more of the nice-to-have feedback from users that you should have gained from all your experiments along the way. In the transport example, the car at the end is a convertible rather than a sedan. A sedan would have been perfectly usable; however, at some point while testing out ideas, the user mentioned that the one reason they preferred the skateboard over the bus ticket was that they liked to feel the fresh air on their face. That kind of feedback is what makes things lovable/special – although in no way does it make them more usable. FYI, most organisations running an iterative approach choose to stop once they get here, due to the law of diminishing returns. The problem with waterfall was that, because everything was done big bang, you didn’t have the option of stopping when you’d “done enough” (there were no feedback loops to figure out when done enough was).

Depending on which idea or feature we are talking about, a team could be anywhere on this journey. Which is why the WHY becomes super important when we’re discussing a product backlog item. WHY are we doing this piece of work? WHAT do we need to get out of it? WHAT is the assumption that we want to test? And what does that mean for how much effort and rigour we need to apply to that piece of work?

Besides helping teams identify what kind of work they should be focusing on, these two models (especially the user needs one) have been helpful to me in conversations about where UX and UI fit into the Scrum team mix. If you think of UX as part of making things basically usable (the bottom part of that layer, if you will); a combination of UX/UI as moving you into the pleasurable layer; and UI as creating the cherry-on-top – then it’s easier to see when just building to a wireframe is OK (to get feedback on functional and reliable) vs when something needs to be pixel-perfect with animations (incorporating what we learned from testing functional, reliable features).

Or, roughly:

[Image: Hierarchy of User Needs annotated with where UX and UI fit]

Let me know if you find these two models helpful. Have you used them before? Where have you found that they didn’t translate that well into a real life scenario? What did you learn?

I recently attended the regional Scrum Gathering for 2016 in Cape Town. This is a post in a series on the talks that I attended. All posts are based on the sketch-notes that I took in the sessions. 

A lot of this talk was a repeat of things I’ve heard before:

  • Efficient feedback increases effectiveness
  • We need to build things and measure them in order to learn
  • We need to start with understanding user needs
  • When we define a problem, we’re usually guessing, so we need to validate our guesswork (through feedback loops) as quickly as we can

A wicked problem is a problem that one can only fully understand or define once one has found the solution to it. Software is a wicked problem: when we find the solution (note: find, not build), then we can usually answer what the problem actually was.

One of the new ideas for me from this talk was the #ConstructionMetaphorMustFall idea. Traditional engineering models treat coding as part of the build process; however, Jacques argued that code should be treated as a design document, and that writing code forms part of the creative design process. The code equivalent of bricks and mortar is only the conversion of code to bits, together with the (hopefully automated) checking process, i.e. when we release. In this model, things like Continuous Integration and Continuous Deployment are actually design enablers: they allow us to test our theories and verify our design through feedback.
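To make that framing a little more concrete, here is a minimal sketch (the talk contained no code; the rule and names below are made up) of a design theory expressed in code, with the automated check being the “bricks and mortar” that CI lays down at release time:

```python
# Illustrative sketch: a design assumption captured as an automated check.
# The fee rule and the function names are hypothetical, not from the talk.

def monthly_fee(balance: float) -> float:
    """Current design theory: accounts holding 10 000 or more pay no monthly fee."""
    return 0.0 if balance >= 10_000 else 5.0


def test_high_balances_pay_no_fee() -> None:
    # If the design theory turns out to be wrong, this check is where the
    # pipeline gives us that feedback before release.
    assert monthly_fee(10_000) == 0.0
    assert monthly_fee(9_999.99) == 5.0


if __name__ == "__main__":
    test_high_balances_pay_no_fee()
    print("Design assumption still holds.")
```

Seen this way, changing the function is redesigning; the check is the part that turns the design into something we can safely release.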

Shifting your mindset to view writing code as part of the experimental design process rather than execution of a final plan/design would probably lead to a paradigm shift in other areas too. I’ve been saying for a while that, as software tooling improves and more “routine” activities become automated, writing software is becoming less of a scientific engineering field and more of a creative field.

What are your thoughts on shifting the “coding” activity out of the build phase and into the design phase? What does this mean for how we should be thinking about code and feedback?

Recommended reading: SPRINT