Posts Tagged ‘development’

I recently attended the regional Scrum Gathering for 2016 in Cape Town. This is a post in a series on the talks that I attended. All posts are based on the sketch-notes that I took in the sessions. 

A lot of this talk was a repeat of things I’ve heard before:

  • Efficient feedback increases effectiveness
  • We need to build things to learn things through measuring
  • We need to start with understanding user needs
  • When we define a problem, we’re usually guessing, so we need to validate our guesswork (through feedback loops) as quickly as we can

A wicked problem is a problem that one can only fully understand or define once one has found the solution. Software is a wicked problem: only once we find the solution (note: find, not build) can we usually say what the problem actually was.

One of the new ideas for me from this talk was the #ConstructionMetaphorMustFall idea. Traditional engineering models treat coding as part of the build process; Jacques argued, however, that code should be treated as a design document and that writing code forms part of the creative design process. The code equivalent of bricks and mortar is really only the conversion of code to bits, together with the (hopefully automated) checking process, i.e. when we release. In this model, things like Continuous Integration and Continuous Deployment are actually design enablers: they allow us to test our theories and verify our design through feedback.
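To make the idea of CI as a design enabler a little more concrete, here is a minimal sketch of my own (not from the talk, and in Python purely for illustration): a design assumption written down as an executable check, so that every integration run gives feedback on whether the "theory" still holds.

```python
# Illustrative only: a design assumption captured as an executable check.
# If something like this runs on every commit (e.g. in a CI pipeline), each
# integration becomes design feedback rather than a one-off "build" step.

def discounted_total(order_total: float, is_returning_customer: bool) -> float:
    """Current design theory: returning customers get 10% off orders over 100."""
    if is_returning_customer and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total

# pytest-style checks that document (and continuously verify) the design
def test_returning_customer_gets_discount_over_threshold():
    assert discounted_total(200.0, True) == 180.0

def test_new_customer_pays_full_price():
    assert discounted_total(200.0, False) == 200.0
```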

Shifting your mindset to view writing code as part of the experimental design process rather than execution of a final plan/design would probably lead to a paradigm shift in other areas too. I’ve been saying for a while that, as software tooling improves and more “routine” activities become automated, writing software is becoming less of a scientific engineering field and more of a creative field.

What are your thoughts on shifting the “coding” activity out of the build phase and into the design phase? What does this mean for how we should be thinking about code and feedback?

Recommended reading: SPRINT


I recently attended the regional Scrum Gathering for 2016 in Cape Town. This is a post in a series on the talks that I attended. All posts are based on the sketch-notes that I took in the sessions. 

As we’re also looking to move a very complicated, fragile system over to a new technology stack, I found this talk by @NigelBasel quite interesting. One of the most interesting parts was his application of Conway’s Law to the problem: the software had evolved and been modelled when there were only a handful of developers working on the system, and now it needed to change to model the communication structures required between a number of teams working on the same codebase. He also showed us the output from a really cool tool called Gource (I think), which he’d used to visualise changes in their source code repository over time. Made me wish I could see the same animation for ours! I’m sure it would be fascinating!

Nigel suggested two steps one could take when faced with this tangled legacy-code problem:

  1. Stop digging (yourself into the hole)
  2. Get out of the hole (carefully)

They’re still progressing on their journey, but these are the steps they’ve taken so far to try and bring their code base back under control:

  1. They identified and fixed any coincidental cohesion – they moved parts of the code that logically belonged together, but had ended up scattered simply because of where they happened to be written, into things like libraries and services.
  2. They shifted their thinking from layers to services and considered whether certain services, such as authentication, could be provided by third-party tools and removed from their code base. If a service seemed to make sense, they created it and then migrated functions across as they were needed, thereby “starving” the old code base (see the sketch after this list).
  3. They considered their code base in terms of business features and the data each required, and tried to group these together.
  4. They write all new code using the new strategy (as far as possible).
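As an aside, here is a rough sketch of my own of what “starving” the old code base could look like in practice. This is my illustration, not Nigel’s actual code, and the names and the authentication example are hypothetical: a thin facade routes each call to the new service once it has been migrated, so the legacy implementation is gradually exercised less and less.

```python
# Hypothetical illustration of "starving" a legacy code base (not Nigel's code).
# Calls are routed to the new service once a function has been migrated,
# so the old implementation slowly stops being used.

MIGRATED = {"authenticate"}  # grows as more functions move to the new services


class LegacyMonolith:
    def authenticate(self, user: str, password: str) -> bool:
        # old, tangled implementation still living in the monolith
        return password == "old-secret"


class AuthService:
    def authenticate(self, user: str, password: str) -> bool:
        # new, separately owned and deployed implementation
        return password == "new-secret"


class Facade:
    """Single entry point that decides which implementation handles a call."""

    def __init__(self) -> None:
        self._legacy = LegacyMonolith()
        self._auth = AuthService()

    def authenticate(self, user: str, password: str) -> bool:
        target = self._auth if "authenticate" in MIGRATED else self._legacy
        return target.authenticate(user, password)


print(Facade().authenticate("sam", "new-secret"))  # True: served by the new service
```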

One issue Nigel admitted they haven’t really got to grips with yet is version control. He emphasised that their journey is not yet done. I’m hoping we will hear more about their adventures and learnings once they have travelled further along their path. Did you find any of these points helpful? Do you have experience changing old code to reflect new organisational communication patterns?

This is a post in a series on the talks that I attended at my first ever Agile Africa Conference in Johannesburg. All posts are based on the sketch-notes that I took in the sessions. 

I was hoping the video for this talk would be available before I got round to blogging about it, but unfortunately it isn’t 🙂 I’ll update this post with a link once it is available, because I don’t feel my notes will really do Neal Ford’s talk full justice.


The talk kicked off with a couple of tales about QWERTY keyboards and the legacy that remains. Most people have heard the fable that QWERTY keys are arranged to make typists type more slowly to prevent jamming (apparently not true…), but how many of us have ever considered the underscore? Sure, we use it now, because it’s there, but did it really make sense to port it over to modern-day keyboards when it was originally there to enable manual typists to underline pieces of text? Perhaps not 🙂

To Neal, anti-patterns are wolf ideas in sheep’s clothing: they look like a good idea at the time, but sometimes turn out to be terrible. He then went on to describe six practices that have, over time, proven to create anti-patterns:

1. Abstraction Distraction

Abstraction seems like a clever way to black-box certain functions; however, as the abstraction layers increase, things become increasingly complicated and confusing, eventually resulting in so much abstraction that the code is almost unusable and people are afraid to go in and make changes because they have no idea what will break.
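To illustrate (a contrived example of my own, not one Neal gave): each layer below simply delegates to the next without adding any behaviour, so a reader has to walk through several classes to find the one line that actually does anything.

```python
# Contrived example of abstraction distraction: pure pass-through layers.

class Repository:
    def find_user(self, user_id: int) -> dict:
        return {"id": user_id, "name": "Ada"}   # the only real logic


class RepositoryAdapter:
    def __init__(self, repo: Repository) -> None:
        self._repo = repo

    def find_user(self, user_id: int) -> dict:
        return self._repo.find_user(user_id)     # adds nothing


class UserService:
    def __init__(self, adapter: RepositoryAdapter) -> None:
        self._adapter = adapter

    def get_user(self, user_id: int) -> dict:
        return self._adapter.find_user(user_id)  # adds nothing either


# Three layers to navigate, one line of behaviour.
print(UserService(RepositoryAdapter(Repository())).get_user(1))
```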

2. Code Reuse Abuse

Code reuse, in principle, is good, but it can also lead to a whole bunch of dependencies that need to be managed.

3. Overlooking Dynamic Equilibrium

Things change all the time – and they change at different paces – so it’s really hard to plan an architecture for the future because you don’t know which parts will change when and at what pace. It’s no longer possible to be predictable, so rather focus on being adaptable.

4. Dogma

“Do it because I said so” or “do it because that’s how it’s always been done”

5. Sabotage
  • Cowboy coding
  • Focusing on cool stuff rather than necessary stuff
  • Inventing cool puzzles to solve
6. Forgetting that some things scale non-linearly
  • Examples are code, teams, and tools over time.
  • As the size increases, the pain increases exponentially
  • Tools have life-cycles, and Neal recommended that when you are using a tool and it starts becoming painful to use, then you should probably start looking for a new tool

Neal also recommended a book called “Evolutionary Architectures” and mentioned some key concepts to bear in mind:

  • Components are deployed
  • Features are released
  • Allow for incremental change at an architectural level
  • Embrace experimentation and remember to include users in your feedback loop

And I’ll leave you with one last thought from the talk: Is Transitive Dependency Management tomorrow’s GoTo anti-pattern?


Thanks to sketchingscrummaster.com

I recently attended the regional Scrum Gathering for 2015 in Johannesburg. This is a post in a series on the talks that I attended. All posts are based on the sketch-notes that I took in the sessions. 


I was extremely interested in hearing Biase talk about the journey a large bank in South Africa has recently embarked on to realise the benefits of integrated DevOps. Sadly for me, as we have many of the same challenges, their journey is not yet complete, so they haven’t yet answered all the questions! I guess no one will ever have the perfect answer, but some recommendations would have been helpful 🙂

Their journey began with a pilot to try to realise the benefits of a team owning the entire value stream and not having different hand-offs between delivery, release management, and support. Some of the benefits of doing this would include:

  • Improved quality
  • Increased knowledge sharing
  • Increased organisational effectiveness
  • Shorter time to market
  • The ability to deploy faster with fewer failures

Their pilot initially consisted of two teams:

  1. A feature team (building the features); and
  2. A DevOps team (building tools to support CI and deployments for the feature team)

The feature team prioritised the work that the DevOps team did, and their working relationship was governed by a set of principles and a working agreement. Apparently, through this experiment, they have realised that having two teams doesn’t really work and that it is better to integrate DevOps skills into each feature team. Their challenge now is that there aren’t enough DevOps skills available for the number of teams they currently have, so they are trying to find ways to change that. Rather than taking a push approach, they are trying pull techniques like hackathons, demo days, and gamification to encourage the feature teams to build the skills from within.

Biase highlighted a number of challenges they experienced at the start of their journey, as well as the value of finding experts to help teams work through the technical issues. Their next set of experiments on this journey relates to:

  • Growing skills from the ground up
  • Creating the necessary culture shift
  • Allowing for organic growth

I look forward to hearing more about their journey to come. What is your experience in including DevOps skills in your cross-functional feature teams?

One of my teams has recently expressed a desire to become more familiar with XP practices, including more formalised pairing and TDD, and when I offered them the option of doing a code kata during one of our retrospective slots, they were happy to give it a go.


 

Last year I was fortunate enough to attend a kata at the annual South African Scrum Gathering. The session included some information on what a kata is, some advice on how to run one, plus actually pairing up and practicing. Thankfully the kata used during the Scrum Gathering session was a TDD/Pairing Kata, so I was able to use one that I was familiar with for our first session.

Prep

Preparation entailed:
– Sending out some kata context (as not everyone knew what they were)
– Having the team select a development and testing framework and make sure it was all set up
– Finding a suitable room (easier said than done, as it turned out!)
– Sending out some pre-reading about pair programming, particularly the roles and red-green-refactor

From my side, I had to source the kata (thank you, Google) and also enlist the help of some co-facilitators. For this I employed the services of another Scrum Master (who was familiar with katas) and our team architect (who is an advocate of TDD and pairing practices). As you will see from the team’s feedback, having the right coaches/advisors in the room added significant value to the exercise, especially if the majority of your team members are new to the techniques being practiced.

The Session

The Agenda

My session started with an overview of the kata concept and I repeated some key points around TDD and pairing. This might not be necessary if your team is good at doing their pre-reading. I did get some feedback that there was too much information and too many instructions up-front, which is probably because both the concept (katas) and the topics (TDD/pairing) were new. In hindsight, some way of splitting the learning would probably have been less confusing. I also printed some of the key TDD/pairing slides on the back of the kata exercise (although I didn’t notice anyone actually referring back to them during the session). It is important to emphasise that the kata is NOT about how quickly or efficiently you solve the problem, but about practicing the process used to get there. Thankfully, although my team can be quite competitive, I think everyone grasped this.
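For anyone who has never seen red-green-refactor in action, here is a tiny, hypothetical first pass at a FizzBuzz-style exercise (not necessarily the kata we used, and written in Python with pytest-style tests purely for illustration):

```python
# Hypothetical first red-green-refactor cycle for a FizzBuzz-style kata.

# RED: the pair writes the smallest failing test first.
def test_returns_the_number_as_a_string():
    assert fizzbuzz(1) == "1"

def test_multiples_of_three_return_fizz():
    assert fizzbuzz(3) == "Fizz"

# GREEN: the simplest code that makes the current tests pass.
def fizzbuzz(n: int) -> str:
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# REFACTOR: with the tests green, the pair tidies names and structure,
# then swaps roles and writes the next failing test (e.g. multiples of five).
```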

I decided to run the same kata twice, as part of the goal was to actually practice the process itself, and one could see the team became a lot more comfortable by the time we ran the second round. We only had an hour, so I opted for 15 minutes per round, which unfortunately meant each round ended just as most pairs were hitting ‘flow’. Ideally one would run the kata for half an hour, but when a team is doing a kata for the first time, I’m not sure whether one thirty-minute kata is better than two shorter ones to get them familiar. Now that my team have done one, I’d be very comfortable running a single thirty-minute session.

The other thing I did was mix up the pairs between rounds. This worked well because people got to see how other pairs had approached the same problem, and the more ‘reluctant’ participants (mostly the non-developers) warmed up by the second round once they realised that most of the team were enjoying the process.

Feedback

 

3Hs

I wrapped up the session with a quick 3H (Helped, Hindered, Hypothesis) exercise to get some feedback for myself on how the session went. Overall, I think it went well, and it was nice to see the level of interaction and energy it generated. Generally the feedback was good too, so hopefully I’ll have an opportunity to run more of these in the future. In case you’re thinking of running one yourself, here are some of the things the team felt:

  • Helped them:
    – Having an early example (built into our selected kata)
    – Coaches for technical and ‘process’ support
    – Certain team members having experience with the techniques we were practicing
    – Switching partners during the exercise
    – Having the developers write the code (based on instructions) even when they weren’t driving
    – Pairing with someone who thought differently to you
    – Learning in pairs
  • Hindered them:
    – Testing framework (the team could pick what they wanted to use)
    – Too little time
    – Inexperience with katas
    – Initial example was not very clear
    – Non developers lacked context


What have you learnt from running katas? Are there any you would particularly recommend?

 

Just for fun: bugs


Love this picture!

Bugs

“Refactoring is like tidying up at home. If every time I come back from shopping or from a business trip, I sling down my things and don’t put them away, pretty soon my house is a mess. I can’t find anything; I may end up buying new items because I can’t find the ones I know I already have. It becomes more difficult to move around the house – there are piles of stuff everywhere! I may even break something because it’s obscured by other stuff on top of it.

Refactoring is the necessary act of putting code in the right place, where other developers can find it quickly and easily. It’s keeping code organised and decluttered. Developers need to do refactoring, or they can end up with the same code in several places, which takes more effort to maintain. Refactoring is not the aesthetic organisation of the code, such as applying feng shui to your home – it’s basic housekeeping.”


Agile Coaching – Rachel Davies & Liz Sedley
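To make the housekeeping analogy a little more concrete, here is a tiny before-and-after sketch of my own (not from the book): the same calculation copied into two places is moved into one well-named home, so there is one place to find it and one place to change it.

```python
# My own illustration of "putting code in the right place".

# Before: the same VAT calculation is copy-pasted wherever it is needed.
def invoice_total_before(amount: float) -> float:
    return amount + amount * 0.15   # VAT worked out here...

def quote_total_before(amount: float) -> float:
    return amount + amount * 0.15   # ...and again here.

# After: one home for the rule, reused everywhere it is needed.
VAT_RATE = 0.15

def add_vat(amount: float) -> float:
    return amount * (1 + VAT_RATE)

def invoice_total(amount: float) -> float:
    return add_vat(amount)

def quote_total(amount: float) -> float:
    return add_vat(amount)
```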